The Multipolar Trap

Why Every Country Knows ASI Is Dangerous But Builds It Anyway

What if every rational move pushes us closer to extinction?

Shared Doom

Everyone sees the risk; no one wants to slow down.

Rational Defection

Each player would rather race than be left behind.

Safety Penalty

Caution becomes a handicap in the AI market and the state race.

Broken Incentives

Without new rules, the game pushes us toward the cliff.


by Aamir Butt

Blog 7 of 10 in The Great Threshold series.

Here's the paradox keeping me awake: everyone understands that ASI poses existential risk. Leading researchers estimate a 10-50% chance it kills everyone. Yet every major nation and company races to build it as fast as possible, cutting corners on safety.

Why would rational actors sprint toward potential extinction?

Because game theory is a merciless bitch. Welcome to the multipolar trap—where rational individual action produces collective catastrophe, and nobody can escape.

The Prisoner's Dilemma at Civilization Scale

The classic prisoner's dilemma: two suspects are arrested and questioned separately. Each can cooperate (stay silent) or defect (betray the partner).

  • Both cooperate: Light sentences for both (best collective outcome)

  • One defects while other cooperates: Defector goes free, cooperator gets harsh sentence

  • Both defect: Moderate sentences for both (worst collective outcome)

Rational choice: defect. If your partner cooperates, defecting gets you the best personal outcome. If your partner defects, defecting spares you the worst. Defection dominates regardless of your partner's choice.

Result: both defect, producing an outcome worse for both than mutual cooperation.
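The dominance argument above can be sketched in a few lines of Python. The payoff numbers (3, 0, 5, 1) are standard textbook values chosen for illustration, not figures from this post:

```python
# Hypothetical payoffs (higher is better) for the classic prisoner's dilemma.
# Keys are (my_move, partner_move).
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # both stay silent: light sentences
    ("cooperate", "defect"):    0,  # I stay silent, partner betrays me: harsh sentence
    ("defect",    "cooperate"): 5,  # I betray, partner stays silent: I go free
    ("defect",    "defect"):    1,  # both betray: moderate sentences
}

def dominates(move_a: str, move_b: str) -> bool:
    """True if move_a does at least as well as move_b against every
    possible partner move, and strictly better against at least one."""
    partner_moves = ("cooperate", "defect")
    at_least = all(PAYOFF[(move_a, p)] >= PAYOFF[(move_b, p)] for p in partner_moves)
    strictly = any(PAYOFF[(move_a, p)] > PAYOFF[(move_b, p)] for p in partner_moves)
    return at_least and strictly

print(dominates("defect", "cooperate"))  # True: defection dominates
print(dominates("cooperate", "defect"))  # False
```

The exact numbers don't matter, only their ordering: as long as betraying beats staying silent no matter what the partner does, defection is the dominant strategy for both players.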

Now scale this to ASI development with nations and corporations as players.

The ASI Game Matrix

Player A (US):

  • Cooperate = Slow down for safety research

  • Defect = Race ahead with minimal safety

Player B (China):

  • Cooperate = Slow down for safety research

  • Defect = Race ahead with minimal safety

Payoff matrix:

  • Both cooperate: Slow but safe development. Both achieve ASI eventually with good safety. Shared prosperity. Best collective outcome.

  • US cooperates, China defects: China achieves ASI first, gains decisive advantage, US faces permanent subordination. Worst outcome for US.

  • China cooperates, US defects: US achieves ASI first, gains decisive advantage, China faces permanent subordination. Worst outcome for China.

  • Both defect: Fast but unsafe development. Whoever achieves ASI first wins, but high probability of misaligned ASI killing everyone. Worst collective outcome, but neither wants to be the one who loses the race.

What's the Nash equilibrium? Both defect.

Even though both nations would benefit from cooperation, each faces an irresistible incentive to defect. Slowing down unilaterally means certain loss. Racing ahead means a possible win or possible extinction; from each player's perspective, certain loss is worse than possible extinction.
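The payoff structure above can be made concrete with a small pure-strategy Nash equilibrium check. The numeric payoffs below are invented for illustration; only their ordering matters (winning the race > mutual safety > mutual race with extinction risk > certain subordination):

```python
# Illustrative (made-up) payoffs for the ASI race game described above.
# Each cell: (US payoff, China payoff); higher is better.
COOPERATE, DEFECT = 0, 1
payoffs = [
    # China cooperates   China defects
    [(3, 3),            (-5, 4)],    # US cooperates
    [(4, -5),           (-2, -2)],   # US defects
]

def pure_nash_equilibria(p):
    """Return all cells where neither player gains by unilaterally switching."""
    equilibria = []
    for us in (COOPERATE, DEFECT):
        for cn in (COOPERATE, DEFECT):
            us_best = all(p[us][cn][0] >= p[alt][cn][0] for alt in (COOPERATE, DEFECT))
            cn_best = all(p[us][cn][1] >= p[us][alt][1] for alt in (COOPERATE, DEFECT))
            if us_best and cn_best:
                equilibria.append((us, cn))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [(1, 1)]: both defect
```

Mutual cooperation (3, 3) beats mutual defection (-2, -2) for both players, yet (defect, defect) is the only equilibrium: from any other cell, someone profits by switching to the race.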

Why Cooperation Fails in Practice

  • Trust deficit: US doesn't trust China to honor safety agreement. China doesn't trust US. Both assume other would defect, so both defect preemptively.

  • Verification impossibility: ASI development happens in data centers. How do you verify compliance? Can't inspect every server globally. Cheating is trivial and undetectable.

  • Domestic pressure: Leaders face internal incentives to prioritize winning. Chinese Communist Party legitimacy tied to technological supremacy. US politicians need wins against China. Safety research is invisible long-term investment; ASI achievement is visible short-term victory.

  • Asymmetric timelines: Political leaders optimize for 2-6 year election cycles. ASI timeline is 5-30 years. Rationally, they prioritize short-term wins over long-term safety.

  • Winner-take-all dynamics: First to ASI doesn't just get advantage—they get decisive, permanent advantage. Military supremacy, economic dominance, technological lock-in. Second place might mean permanent subordination. These stakes overwhelm safety concerns.

The Corporate Version: Racing to the Bottom

Same dynamics apply to corporations:

OpenAI, Anthropic, DeepMind, Meta—each faces:

  • Cooperate = Implement strict safety protocols, slow development

  • Defect = Move fast, deploy despite uncertainty

If all cooperate: Slow but safe. Industry thrives long-term.

If you cooperate while competitors defect: You lose market position, investors flee, talent leaves, you become irrelevant.

If you defect while others cooperate: You dominate market, attract capital and talent, become leader.

If all defect: Fast but unsafe. Whoever deploys first wins, but high probability of catastrophic failures.

Again, Nash equilibrium is all defect. Safety becomes competitive disadvantage. Companies that prioritize safety get outcompeted by those willing to take risks.

This is why current safety investment is under 1% despite the existential stakes. Not because leaders are evil, but because incentive structures make safety the rational choice for nobody.

Historical Precedents: When This Dynamic Killed People

World War I: Alliance systems created a multipolar trap. Each nation armed defensively, but armament threatened neighbors, who armed more, creating an arms race. When crisis hit (the Archduke's assassination), nobody could back down without appearing weak. Result: roughly 20 million dead in a war nobody wanted but everyone rationally chose.

Climate change: Each nation benefits from emissions reduction if others reduce. But unilateral reduction means economic disadvantage while others free-ride. Rational choice: emit now, negotiate later. Result: Insufficient action despite clear science and existential threat.

Overfishing: Each fishing boat benefits from restraint if others restrain. But unilateral restraint means lost income while others harvest. Rational choice: overfish now before others do. Result: Fishery collapse hurting everyone.

Financial crisis preparation: Each bank benefits from high capital requirements if all banks have them. But unilateral high requirements mean a competitive disadvantage. Rational choice: low capital, high leverage. Result: the 2008 financial crisis.

The pattern is consistent: short-term individual rationality produces long-term collective catastrophe.

Why ASI Multipolar Trap Is Worse

Previous examples had escape routes:

  • WWI ended after 4 years of horror

  • Climate change can be mitigated with sufficient effort

  • Fisheries can recover if protected

  • Financial regulations can be implemented post-crisis

With ASI, we likely don't get a second chance. If the first ASI deployed is misaligned, it's probably game over: it resists shutdown, deceives operators, and optimizes for its goals regardless of consequences.

No iteration opportunity. No learning from mistakes. No regulatory response post-crisis because there might not be a post-crisis.

And the timeline is compressed: the nuclear arms race gave us decades to develop deterrence theory and treaties. ASI might go from AGI to superintelligence in weeks.

Breaking the Trap: What Actually Might Work

1. Credible Verification Mechanisms

Make cheating detectable:

  • Compute governance: Track chip production, monitor energy consumption of large training runs, export controls on advanced semiconductors

  • Whistleblower protections: Incentivize insiders to report unsafe development

  • International monitoring: UN-style inspectors for AGI labs (like the IAEA for nuclear facilities)

Current status: US implementing chip export controls on China. Helps somewhat, but insufficient.

2. Shared Benefits Reduce Competition

Make cooperation more valuable than defection:

  • Technology sharing agreements: Both sides benefit from ASI rather than winner-take-all

  • Joint research initiatives: Pooled resources accelerate safety research

  • Economic interdependence: Trade relationships raise cost of conflict

Current status: Minimal. US-China trade war, decoupling, zero-sum framing.

3. Third-Party Enforcement

Create authority that can punish defection:

  • International treaty with teeth: Not just voluntary commitments but enforceable consequences

  • Sanctions for violations: Economic penalties, diplomatic isolation, technology embargoes

  • Collective security guarantees: Attack on one (through unsafe ASI development) treated as attack on all

Current status: Non-existent. No international body has enforcement power over AI development.

4. Changing Domestic Incentives

Make safety politically valuable:

  • Public awareness: Voters demanding safety makes it electorally valuable

  • Long-term thinking: Institutions optimizing for decades, not quarters

  • Expert influence: Scientists and safety researchers empowered over commercial interests

Current status: Growing awareness but insufficient political will.

5. The "Enlightened Self-Interest" Appeal

Even if you win the ASI race, misaligned ASI kills you too. No point winning if victory means death.

Both US and China leaders want:

  • Continued existence

  • Prosperity for their nations

  • Legacy as leaders who secured the future

None of these is achievable if ASI kills everyone. Cooperation on safety therefore serves enlightened self-interest even amid broader competition.

This is the argument that might actually work: Not altruism, not idealism—pure self-interest properly understood across longer timelines.

My Probability Assessment

Probability of breaking multipolar trap before ASI: 30-40%

Why so low?

  • Historical track record on similar problems is poor

  • Current trajectory shows no signs of coordination

  • Competitive dynamics intensifying, not easing

  • Verification mechanisms insufficient

  • Political incentives favor defection

Why not lower?

  • Existential stakes might focus minds as threat becomes concrete

  • Cuban Missile Crisis shows crisis can produce cooperation

  • Technical solutions (compute governance) are possible

  • Growing elite awareness of risks

  • Enlightened self-interest argument is powerful

What You Can Do

  • Vote: Make ASI safety and international coordination voting issues. Politicians respond to constituent pressure.

  • Advocate: Support organizations working on AI governance and Track-2 diplomacy between US and China.

  • Educate: Most people don't understand multipolar trap dynamics. Explain them. Build consensus that cooperation is necessary.

  • Pressure corporations: Boycott, divest, shame companies prioritizing speed over safety. Make safety good PR and good business.

  • Support research: Fund think tanks and academics developing governance frameworks and verification mechanisms.

The multipolar trap is the default outcome absent deliberate intervention. Breaking it requires understanding the incentive structures and deliberately changing them.

"We're all on the same train toward a cliff. The question isn't which country reaches the cliff first—it's whether we hit the brakes before everyone goes over."

Currently, nobody's hitting the brakes because everyone fears being overtaken. We need to understand this isn't a race to victory—it's a race to potential extinction, and the "winner" might just be the first to die.

The trap is real. But traps can be escaped if we understand them clearly and act collectively. That's the only hope.


Copyright © 2025 Pullstream Company. All Rights Reserved.